OpenAI Touts New AI Safety Research. Critics Say It's a Good Step, but Not Enough
OpenAI has faced opprobrium in recent months from those who suggest it may be rushing too quickly and recklessly to develop more powerful artificial intelligence. The company appears intent on showing it takes AI safety seriously. Today it showcased research that it says could help researchers scrutinize AI models even as they become more capable and useful. The new technique is one of several ideas related to AI safety that the company has touted in recent weeks. It involves having two AI models engage in a conversation that forces the more powerful one to be more transparent, or "legible," with its reasoning so that humans can understand what it's up to.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.88)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.75)
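The article describes the setup only at a high level. As a purely illustrative sketch of the two-model exchange it mentions, the following assumes the OpenAI Python SDK, placeholder model names, and invented prompts; the actual research involves training the stronger "prover" so that a weaker "verifier" can check its reasoning, which this does not show.

```python
# Hypothetical sketch of a two-model "legibility" exchange.
# Model names and prompts are placeholders, not OpenAI's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def prover_solve(problem: str) -> str:
    """Stronger model writes a step-by-step, checkable solution."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: stands in for the stronger "prover"
        messages=[{
            "role": "user",
            "content": f"Solve this step by step so each step is easy to verify:\n{problem}",
        }],
    )
    return resp.choices[0].message.content

def verifier_check(problem: str, solution: str) -> str:
    """Weaker model judges whether the reasoning is legible and correct."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: stands in for the weaker "verifier"
        messages=[{
            "role": "user",
            "content": (
                f"Problem:\n{problem}\n\nProposed solution:\n{solution}\n\n"
                "Can you follow every step? Reply VALID or INVALID with a reason."
            ),
        }],
    )
    return resp.choices[0].message.content

problem = "A train travels 120 km in 1.5 hours. What is its average speed?"
print(verifier_check(problem, prover_solve(problem)))
```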
OpenAI's Long-Term AI Risk Team Has Disbanded
In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI's chief scientist and one of the company's cofounders, was named as the colead of this new team. OpenAI said the team would receive 20 percent of its computing power. Now OpenAI's "superalignment team" is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday's news that Sutskever was leaving the company, and the resignation of the team's other colead.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
OpenAI researchers spoke of AI breakthrough before CEO ouster
Ahead of OpenAI CEO Sam Altman's four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter said. The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft in solidarity with their fired leader. The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. A copy of the letter could not be reviewed for this report.
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.74)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.74)
OpenAI exec describes GPT-4 'recipe for producing magic'
OpenAI, the San Francisco A.I. lab that is now closely tied to Microsoft, says that GPT-4 is much more capable than the GPT-3.5 model underpinning the consumer version of ChatGPT. For one thing, GPT-4 is multi-modal: it can take in images as well as text, although it only outputs text. This opens up the ability of the A.I. model to "understand" photos and scenes. The new model performs much better than GPT-3.5 on a range of benchmark tests for natural language processing and computer vision algorithms. It also performs very well on a battery of diverse tests designed for humans, including a very impressive score on a simulated bar exam as well as earning a five, the top score, on a wide range of Advanced Placement exams, from Math to Art History.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.74)
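Since image input of the kind described is exposed through the same chat interface as text, a minimal sketch looks like the following, assuming the current OpenAI Python SDK; the model identifier and the image URL are placeholders, not details from the article.

```python
# Minimal sketch: sending an image plus a text question to a GPT-4-class
# model via the OpenAI chat API. The image URL is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any GPT-4-family model with vision input
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is happening in this photo."},
            {"type": "image_url", "image_url": {"url": "https://example.com/scene.jpg"}},
        ],
    }],
)
# As the article notes, the model takes images in but only outputs text.
print(response.choices[0].message.content)
```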
GPT-4 has arrived. It will blow ChatGPT out of the water.
The artificial intelligence research lab OpenAI on Tuesday launched the newest version of its language software, GPT-4, an advanced tool for analyzing images and mimicking human speech, pushing the technical and ethical boundaries of a rapidly proliferating wave of AI. OpenAI's earlier product, ChatGPT, captivated and unsettled the public with its uncanny ability to generate elegant writing, unleashing a viral wave of college essays, screenplays and conversations -- though it relied on an older generation of technology that hasn't been cutting-edge for more than a year. GPT-4, in contrast, is a state-of-the-art system capable of not just producing words but also describing images in response to a person's simple written commands. When shown a photo of a boxing glove hanging over a wooden seesaw with a ball on one side, for instance, a person can ask what will happen if the glove drops, and GPT-4 will respond that it would hit the seesaw and cause the ball to fly up. The buzzy launch capped months of hype and anticipation over an AI program, known as a large language model, that early testers had claimed was remarkably advanced in its ability to reason and learn new things.
- Europe > Greece (0.05)
- Oceania > Australia (0.05)
- North America > United States > California > San Francisco County > San Francisco (0.05)
- Europe > Ukraine (0.05)
- Media (0.90)
- Government (0.69)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.77)
OpenAI releases Artificial Intelligence tool that can produce an image from text
OpenAI researchers have created a new system that can produce a full image, such as an astronaut riding a horse, from a simple plain-English sentence. Known as DALL·E 2, the second generation of the text-to-image AI is able to create realistic images and artwork at a higher resolution than its predecessor. The artificial intelligence research group won't be releasing the system to the public, but hopes to offer it as a plugin for existing image-editing apps in the future. The new version is able to create images from simple text, add objects into existing images, or even provide different points of view on an existing image. Developers imposed restrictions on the scope of the AI to ensure it could not produce hateful, racist or violent images, or be used to spread misinformation.
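For readers who later gained API access, text-to-image generation of the kind described is exposed roughly as follows; this is a sketch assuming the current OpenAI Python SDK (at the time of the article the system was not publicly available, and the model name and image size here are assumptions).

```python
# Sketch: generating an image from a plain-English prompt with the
# OpenAI images endpoint. Model name and size are assumptions.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-2",
    prompt="an astronaut riding a horse in a photorealistic style",
    n=1,
    size="1024x1024",
)
print(result.data[0].url)  # URL of the generated image
```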
Language models that can search the web hold promise -- but also raise concerns
Language models -- AI systems that can be prompted to write essays and emails, answer questions, and more -- remain flawed in many ways. Because they "learn" to write from examples on the web, including problematic social media posts, they're prone to generating misinformation, conspiracy theories, and racist, sexist, or otherwise toxic language. Another major limitation of many of today's language models is that they're "stuck in time," in a sense. Because they're trained once on a large collection of text from the web, their knowledge of the world -- which they gain from that collection -- can quickly become outdated depending on when they were deployed.
- North America > United States > New York (0.06)
- Europe > Ukraine (0.05)
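One common way around the "stuck in time" problem the article describes is retrieval-augmented prompting: fetch fresh text at query time and let the model answer grounded in it. A minimal sketch follows, assuming the OpenAI Python SDK and a hypothetical `search_web` helper (the search backend and model name are not from the article).

```python
# Minimal retrieval-augmented sketch: fetch fresh web text, then let the
# language model answer grounded in it. `search_web` is hypothetical.
from openai import OpenAI

client = OpenAI()

def search_web(query: str) -> str:
    """Hypothetical helper: plug in a real search API of your choice."""
    return f"(recent snippets about: {query})"

def answer_with_retrieval(question: str) -> str:
    snippets = search_web(question)  # fresh context the model was not trained on
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model
        messages=[{
            "role": "user",
            "content": f"Using only these sources:\n{snippets}\n\nAnswer: {question}",
        }],
    )
    return resp.choices[0].message.content

print(answer_with_retrieval("Who won the most recent World Cup?"))
```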
Researchers Warn Of 'Dangerous' Artificial Intelligence-Generated Disinformation At Scale - Breaking Defense
A "like" icon seen through raindrops. WASHINGTON: Researchers at Georgetown University's Center for Security and Emerging Technology (CSET) are raising alarms about powerful artificial intelligence technology now more widely available that could be used to generate disinformation at a troubling scale. The warning comes after CSET researchers conducted experiments using the second and third versions of Generative Pre-trained Transformer (GPT-2 and GPT-3), a technology developed by San Francisco company OpenAI. GPT's text-generation capabilities are characterized by CSET researchers as "autocomplete on steroids." "We don't often think of autocomplete as being very capable, but with these large language models, the autocomplete is really capable, and you can tailor what you're starting with to get it to write all sorts of things," Andrew Lohn, senior research fellow at CSET, said during a recent event where researchers discussed their findings.
- North America > United States > California > San Francisco County > San Francisco (0.24)
- Asia > China (0.15)
- North America > United States > District of Columbia > Washington (0.04)
- Asia > South Korea > Seoul > Seoul (0.04)
- Information Technology (1.00)
- Government > Military (1.00)
- Media > News (0.77)
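Unlike GPT-3, GPT-2 is openly downloadable, so the "autocomplete on steroids" behavior the researchers describe can be reproduced in a few lines; a sketch using the Hugging Face transformers library (prompt and sampling settings are illustrative):

```python
# Sketch: GPT-2 as "autocomplete on steroids" -- give it an opening line
# and it continues in the same style. Uses the Hugging Face pipeline API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

outputs = generator(
    "Scientists announced today that",  # tailor the opening to steer the output
    max_new_tokens=40,
    num_return_sequences=2,
    do_sample=True,  # sampling is required for multiple return sequences
)
for out in outputs:
    print(out["generated_text"], "\n---")
```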
Do Convolutional Networks Perform Better With Depth?
"Double descent does not happen through depth." The double descent curve, tells that increasing model capacity past the interpolation threshold can lead to a decrease in test error. Increasing neural network capacity through width leads to double descent. How does increase or reduction in-depth play out towards the end? A group of researchers from MIT have attempted to explore this question in their work titled, "Do Deeper Convolutional Networks Perform Better?".